Revive, Restore, Revitalize: An Eco-economic Methodology for Maasai Mara
The Maasai Mara in Kenya, renowned for its biodiversity, is witnessing
ecosystem degradation and species endangerment due to intensified human
activities. Addressing this, we introduce a dynamic system harmonizing
ecological and human priorities. Our agent-based model replicates the Maasai
Mara savanna ecosystem, incorporating 71 animal species, 10 human
classifications, and 2 natural resource types. The model employs the metabolic
rate-mass relationship for animal energy dynamics, logistic curves for animal
growth, individual interactions for food web simulation, and human intervention
impacts. Algorithms such as fitness-proportional selection and particle swarm
optimization mimic organisms' preferences for resources. To guide preservation activities, we
formulated 21 management strategies encompassing tourism, transportation,
taxation, environmental conservation, research, diplomacy, and poaching,
employing a game-theoretic framework. Using the TOPSIS method, we prioritized
four key developmental indicators: environmental health, research advancement,
economic growth, and security. The interplay of 16 factors determines these
indicators, each influenced by our policies to varying degrees. By evaluating
the policies' repercussions, we aim to mitigate adverse animal-human
interactions and equitably address human concerns. We classified the policy
impacts into three categories: Environmental Preservation, Economic Prosperity,
and Holistic Development. By applying these policy groupings to our ecosystem
model, we tracked the effects on the intricate animal-human-resource dynamics.
Utilizing the entropy weight method, we assessed the efficacy of these policy
clusters over a decade, identifying the optimal blend emphasizing both
environmental conservation and economic progression.

Comment: 25 pages, 16 figures
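The entropy weight method used to score the policy clusters can be sketched in a few lines: indicators that vary more across alternatives carry more information and therefore receive larger weights. The decision matrix below is purely hypothetical (three policy clusters scored on the paper's four indicators), not data from the study.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: derive indicator weights from a decision
    matrix X (rows = alternatives, columns = indicators). Indicators
    with more dispersion across alternatives get larger weights."""
    P = X / X.sum(axis=0)                 # normalize each column to proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # treat 0*log(0) as 0
    E = -(P * logP).sum(axis=0) / np.log(n)      # Shannon entropy per indicator
    d = 1.0 - E                                  # degree of divergence
    return d / d.sum()                           # weights sum to 1

# Hypothetical scores of 3 policy clusters on 4 indicators
# (environmental health, research, economy, security)
X = np.array([[0.8, 0.4, 0.3, 0.6],
              [0.3, 0.5, 0.9, 0.5],
              [0.6, 0.6, 0.6, 0.6]])
w = entropy_weights(X)
print(w)
```

A nearly uniform column (little dispersion across alternatives) has entropy close to 1 and so contributes almost nothing to the composite score, which is exactly why the method suits comparing policy clusters whose effects differ unevenly across indicators.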
Research on the impact of asteroid mining on global equity
In a future where humanity seeks more resources, people march toward the
mysterious, bright starry sky, opening an era of great interstellar
exploration. According to the Outer Space Treaty, any exploration of celestial
bodies should aim to promote global equality and benefit all nations. First, we
define global equity and build a Unified Equity Index (UEI) model to measure
it. We merge the more strongly correlated factors to obtain 6 elements, and
then use the entropy method (TEM) to find the dispersion of these elements
across countries. We then use principal component analysis (PCA) to reduce the
dimensionality of the dispersion, and use a standardized index to obtain global
equity.
Second, we simulated a future with asteroid mining and evaluated its impact on
the UEI. We divided the mineable asteroids into three classes with different
mining difficulties and values, and identified 28 mining entities, including
private companies and national and international organizations. We considered
changes in asteroid classes, mining capabilities, and mining scales to
determine the value of minerals mined between 2025 and 2085, and converted
mining output value into mineral transaction value through an allocation
matrix based on grey relational analysis (GRA). Finally, we presented three
possible futures of asteroid mining by varying these conditions. We propose
two sets of corresponding policies for changes in future trends in global
equity under asteroid mining. We test the separate and combined effects of
these policies and find that they are positive, strongly supporting the
effectiveness of our model.

Comment: 19 pages
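The standardize-then-PCA step for collapsing country-level indicators into one composite score can be sketched with plain NumPy. The country-by-element matrix here is randomly generated for illustration; the element definitions and data are assumptions, not the paper's.

```python
import numpy as np

def pca_first_component(X):
    """Reduce standardized country-level indicators to a single
    composite score via the first principal component."""
    # Standardize each indicator (zero mean, unit variance)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized matrix gives the principal directions
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    # Project onto the first principal component
    return Z @ Vt[0]

# Hypothetical data: 5 countries x 6 equity-related elements
rng = np.random.default_rng(0)
X = rng.random((5, 6))
scores = pca_first_component(X)
print(scores.shape)  # one composite score per country
```

Because the data are standardized first, the first component captures the direction of greatest joint variation among the six elements, and the resulting scores are mean-centered across countries by construction.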
Large Scale Nearest Neighbor Search - Theories, Algorithms, and Applications
We are witnessing a data explosion era, in which huge data sets of billions or more samples, represented by high-dimensional feature vectors, can easily be found on the Web, in enterprise data centers, in surveillance sensor systems, and so on. On such large scale data sets, nearest neighbor search is fundamental for many applications, including content-based search/retrieval, recommendation, clustering, graph and social network research, and many other machine learning and data mining problems.
Exhaustive search is the simplest and most straightforward way to perform nearest neighbor search, but it cannot scale to data sets of the sizes mentioned above. To make large scale nearest neighbor search practical, we need the online search step to be sublinear in the database size, which means offline indexing is necessary. Moreover, to achieve sublinear search time, we usually need to sacrifice some search accuracy, and hence we can often obtain only approximate nearest neighbors instead of exact ones. In other words, by large scale nearest neighbor search, we mean approximate nearest neighbor search methods with sublinear online search time via offline indexing.
To some extent, indexing a vector dataset for (sublinear time) approximate search can be achieved by partitioning the feature space into different regions and mapping each point to its closest region. There are different kinds of partition structures, for example, tree based partition, hashing based partition, and clustering/quantization based partition. From the viewpoint of how the data partition function is generated, partition methods can be grouped into two main categories: 1. data independent (random) partition, such as locality sensitive hashing and randomized trees/forests; 2. data dependent (optimized) partition, such as compact hashing, quantization based indexing methods, and some tree based methods like the kd-tree and PCA tree.
With the offline indexing/partitioning in place, online approximate nearest neighbor search usually consists of three steps: locate the region that the query point falls in, obtain candidates (the database points in the regions near the query region), and rerank and return the candidates. For large scale nearest neighbor search, the key question is: how do we design the optimal offline indexing such that online search performance is best, or more specifically, such that online search is as fast as possible while meeting a required accuracy?
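The three-step online search can be illustrated with a minimal random-hyperplane LSH index, one concrete instance of the data-independent partitions discussed above. This is a toy sketch, not any of the thesis's proposed methods: the region is a bucket keyed by the sign pattern of the point against random hyperplanes, candidates are the bucket's contents, and reranking uses exact distances.

```python
import numpy as np

class HyperplaneLSH:
    """Minimal random-hyperplane LSH index illustrating the three-step
    online search: locate the query's region (hash bucket), gather
    candidates from that bucket, then rerank by exact distance."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}
        self.points = None

    def _key(self, x):
        # Region = sign pattern of x against the random hyperplanes
        return tuple((self.planes @ x > 0).astype(int))

    def index(self, points):
        self.points = points
        for i, p in enumerate(points):
            self.buckets.setdefault(self._key(p), []).append(i)

    def query(self, q, k=1):
        # Step 1: locate the query region; Step 2: gather candidates in it
        cand = list(self.buckets.get(self._key(q), range(len(self.points))))
        # Step 3: rerank candidates by exact Euclidean distance
        d = np.linalg.norm(self.points[cand] - q, axis=1)
        return [cand[i] for i in np.argsort(d)[:k]]

pts = np.random.default_rng(1).standard_normal((1000, 16))
idx = HyperplaneLSH(dim=16)
idx.index(pts)
print(idx.query(pts[42], k=1))  # an indexed point is its own nearest neighbor
```

Search is sublinear because only one bucket's points are reranked rather than the whole database, at the cost of possibly missing true neighbors that hash to other buckets, which is exactly the speed/accuracy trade-off the formulation above optimizes.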
In this thesis, we have studied theories, algorithms, systems and applications for (approximate) nearest neighbor search on large scale data sets, for both indexing with random partition and indexing with learning based partition.
Our specific main contributions are:
1. We unify various nearest neighbor search methods into a data partition framework and provide a general formulation of optimal data partition, which supports the fastest search speed while satisfying a required search accuracy. The formulation is general and can explain most existing (sublinear) large scale approximate nearest neighbor search methods.
2. For indexing with data-independent partitions, we have developed theories on the lower and upper bounds of their time and space complexity, based on the optimal data partition formulation. The bounds apply to a general group of methods called Nearest Neighbor Preferred Hashing and Nearest Neighbor Preferred Partition, including locality sensitive hashing, random forests, and many other random hashing methods. Moreover, we extend the theory to study how to choose the parameters for indexing methods with random partitions.
3. For indexing with data-dependent partitions, we have applied the same formulation to develop a joint optimization approach with two important criteria: nearest neighbor preserving and region size balancing. We have applied the joint optimization to different partition structures such as hashing and clustering, obtaining several new nearest neighbor search methods that outperform (or are at least comparable to) state-of-the-art solutions for large scale nearest neighbor search.
4. We have further studied fundamental problems of nearest neighbor search beyond search methods; for example: what is the difficulty of nearest neighbor search on a given data set (independent of search methods)? What data properties affect this difficulty, and how? How are the theoretical analysis and algorithm design of the large scale nearest neighbor search problem affected by the data set's difficulty?
5. Finally, we have applied our nearest neighbor search methods to practical applications. We focus on the development of large visual search engines using the new indexing methods developed in this thesis. The techniques can be applied to other data-intensive domains and extended to applications beyond visual search engines, such as large scale machine learning, data mining, and social network analysis.
Simulation Study on Material Property of Cantilever Piezoelectric Vibration Generator
Abstract: To increase the generating capacity of a cantilever piezoelectric vibration generator of limited volume, the relations among output voltage, natural frequency, and material parameters are analyzed for unimorph, series-type bimorph, and parallel-type bimorph piezoelectric vibration generators, using both a mechanical model and finite element modeling. The results indicate that PZT-4, PZT-5A, and PZT-5H piezoelectric materials, together with stainless steel or nickel alloy substrate materials, should be the first choices. Copyright © 2014 IFSA Publishing, S. L.
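A standard first estimate for the resonance the abstract refers to is the Euler–Bernoulli fundamental frequency of a uniform rectangular cantilever. This sketch is not the paper's finite element model; the stainless-steel properties and beam dimensions are illustrative assumptions only.

```python
import math

def cantilever_fundamental_frequency(E, rho, L, h, b):
    """First natural frequency (Hz) of a uniform rectangular cantilever
    beam from Euler-Bernoulli theory: f1 = (lam1^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)).
    E: Young's modulus (Pa), rho: density (kg/m^3),
    L: length, h: thickness, b: width (all in meters)."""
    I = b * h**3 / 12.0   # second moment of area of the cross-section, m^4
    A = b * h             # cross-sectional area, m^2
    lam1 = 1.8751         # first-mode eigenvalue for clamped-free boundary
    return (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Assumed stainless-steel substrate beam: 30 mm x 5 mm x 0.3 mm
f1 = cantilever_fundamental_frequency(E=193e9, rho=7900,
                                      L=30e-3, h=0.3e-3, b=5e-3)
print(round(f1, 1), "Hz")
```

Since the frequency scales as h/L^2, thinning or lengthening the beam lowers the resonance toward ambient vibration frequencies, which is why substrate material and geometry matter for harvester output.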